
Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… #228

Closed
AnkushMalaker wants to merge 6 commits into dev from fix/open-memory-fork

Conversation

@AnkushMalaker
Collaborator

@AnkushMalaker AnkushMalaker commented Dec 22, 2025

…k, add setup script for cloning, and enhance README with setup instructions.

Summary by CodeRabbit

  • New Features

    • Switched default memory backend to a new provider and added Neo4j-backed graph support.
    • UI endpoint moved to port 3333; additional service endpoints exposed for graph and admin.
  • Chores

    • Deployment scripts now build images before startup and include health checks.
    • New environment variables for service credentials and connectors; added a service timeout setting.
  • Documentation

    • Updated setup and troubleshooting steps with revised ports and startup workflow.


@coderabbitai
Contributor

coderabbitai bot commented Dec 22, 2025

Walkthrough

Switched memory provider to OpenMemory MCP and updated memory-related config; added Mycelia timeout; replaced/expanded OpenMemory MCP extras (env template, docker-compose, README, setup/run scripts) to use a mem0 fork, Neo4j and Qdrant, and updated service ports and networking; adjusted services.py service metadata and env checks.

Changes

Cohort / File(s) Summary
Memory config
config.yml
memory.provider changed chronicle → openmemory_mcp; memory.openmemory_mcp.server_url http://localhost:8765 → http://host.docker.internal:8765; memory.openmemory_mcp.user_id default → openmemory; removed surrounding quotes from memory.extraction.prompt; added memory.mycelia.timeout: 30 (keeps memory.mycelia.api_url).
OpenMemory MCP env & ignore
extras/openmemory-mcp/.env.template, extras/openmemory-mcp/.gitignore
.env.template adds OPENMEMORY_USER_ID (openmemory), API_KEY, frontend NEXT_PUBLIC_* keys, Neo4j and Qdrant vars, and keeps OPENAI_API_KEY; .gitignore adds mem0-fork/ and .env.
OpenMemory MCP README
extras/openmemory-mcp/README.md
Rewrote quickstart and setup to use Ushadow-io/mem0 fork; changed workflow to setup.sh, updated ports/endpoints (UI 3333, Qdrant 6333, Neo4j 7474/7687), added troubleshooting and port notes, and updated architecture/commands.
Docker compose and services
extras/openmemory-mcp/docker-compose.yml
Replaced service images with fork images (ghcr.io/ushadow-io/...), added neo4j service, introduced Neo4j volumes, added env vars (USER, API_KEY, QDRANT_URL, NEO4J_*), expanded networks to mem0-network, updated healthchecks, changed openmemory-ui port mapping to 3333:3000, and adjusted dependencies.
Setup & run scripts
extras/openmemory-mcp/setup.sh, extras/openmemory-mcp/run.sh
setup.sh reworked to create/ensure .env (from template or minimal), prompt securely for OPENAI_API_KEY, set USER to openmemory, and print updated endpoints; run.sh now runs docker compose up --build -d $PROFILE and logs build notice.
Service registry / tooling
services.py
Updated openmemory-mcp service ports list to include 8765, 7474, 7687, 3333; simplified env-file existence check to verify a .env in each service root (consolidated backend branching).
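Taken together, the Memory config row above would leave config.yml's memory section looking roughly like this (a sketch reconstructed from the summary; the nesting and the omitted keys are assumptions, not the actual file):

```yaml
memory:
  provider: openmemory_mcp            # was: chronicle
  openmemory_mcp:
    server_url: http://host.docker.internal:8765   # was: http://localhost:8765
    user_id: openmemory               # was: default
  mycelia:
    # api_url is kept unchanged
    timeout: 30                       # newly added
  # extraction.prompt and other keys omitted from this sketch
```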

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • 0xrushi

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
Check name Status Explanation Resolution
Title check ❓ Inconclusive The pull request title is truncated and incomplete, ending with 'for…' without clearly stating what the configuration is being updated for. Complete the title to clearly express the main objective, e.g., 'Update OpenMemory MCP configuration to use Ushadow-io/mem0 fork and add Docker Compose setup.'
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

@AnkushMalaker AnkushMalaker changed the title Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… [DO NOT MERGE] Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… Dec 22, 2025
Contributor

@thestumonkey thestumonkey left a comment


Use the right compose values from openmemory's docker-compose.
Should probably submodule it in instead of cloning.

The folder naming is bad. Why is it openmemory-mcp? It's not an MCP server.
"mem0-fork" is a terrible name.
Can submodule it in, or call it mem0-ushadow-fork.

openmemory-mcp:
image: mem0/openmemory-mcp:latest
build:
context: ./mem0-fork/openmemory/api
Contributor


Where is this coming from? It doesn't match the file system.

Collaborator Author


As explained in #228 (comment), it's created during the setup.sh flow.

echo "======================"

# Clone the mem0 fork if not already present
MEM0_DIR="mem0-fork"
Contributor


Although this is fine for testing, I think this is confusing and should just be in the openmemory folder, as we already have many mem types.

Collaborator Author


This is in the openmemory-mcp folder; I will rename the root-level service to extras/openmemory.
Is that what you're saying?

Collaborator Author


I'll rename in another PR as it'll show too many changes in this one


# OpenMemory MCP Server (official Docker image)
# OpenMemory MCP Server (built from Ushadow-io/mem0 fork)
openmemory-mcp:
Contributor


You need to use the compose file from the fork

@AnkushMalaker
Collaborator Author

why is it openmemory-mcp? it's not an mcp server. "mem0-fork" is a terrible name
Oh, this is from before; I haven't changed the root name. I can change the root service extras/openmemory-mcp too. I'm realizing maybe you haven't been using this service.

I should just be able to "run" your built Docker image instead of having to clone and build it, yes. But between cloning and a submodule, I'd prefer a submodule if it were more permanent or if files in this project required access from that fork.
We need it during build time, so cloning works, I think.

Clarifying the flow

Using the wizard.sh flow, each service's setup is run.
So if you run the wizard and say "configure openmemory", it will run the setup here. This setup clones the fork into mem0-fork. I think I can just clone it as mem0 actually, now that you point out the naming.
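The clone step described here can be made idempotent so repeated wizard runs are safe. A minimal sketch (the `ensure_fork` helper name and its "clone"/"exists" outputs are illustrative assumptions, not the actual setup.sh code):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the idempotent clone step in setup.sh.
MEM0_DIR="mem0-fork"

ensure_fork() {
  if [ -d "$MEM0_DIR/.git" ]; then
    # Already cloned; setup.sh could offer to pull updates instead.
    echo "exists"
  else
    # Not present; setup.sh would run:
    #   git clone https://github.com/Ushadow-io/mem0.git "$MEM0_DIR"
    echo "clone"
  fi
}
```

setup.sh would call `ensure_fork` and only invoke `git clone` on the "clone" path, so re-running the wizard never re-clones over an existing checkout.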

@AnkushMalaker
Collaborator Author

@thestumonkey

- Changed memory provider from `chronicle` to `openmemory_mcp` in `config.yml`.
- Updated server URL for OpenMemory MCP to use `http://host.docker.internal:8765`.
- Modified service path in `services.py` to reflect new directory structure.
- Enhanced `.env.template` with additional configuration options for OpenMemory.
- Removed obsolete `docker-compose.yml` file for OpenMemory MCP.
- Updated README with new service start instructions and port configurations.
- Improved setup script to configure the `.env` file correctly for OpenMemory.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
extras/openmemory-mcp/run.sh (1)

58-58: Fix missing directory path in docker compose execution.

The run.sh script executes docker compose up from extras/openmemory-mcp/, but the docker-compose.yml is located in mem0-fork/openmemory/. Add a directory change before executing docker compose:

# After line 50, before docker compose up:
cd mem0-fork/openmemory

Alternatively, use the -f flag to specify the compose file path:

docker compose -f mem0-fork/openmemory/docker-compose.yml up --build -d $PROFILE

Without this fix, docker compose up will fail to find the docker-compose.yml file. Additionally, after applying this fix, the grep pattern openmemory-mcp will correctly match the fork's service container names.

🧹 Nitpick comments (2)
extras/openmemory-mcp/README.md (1)

94-102: Consider wrapping bare URLs in angle brackets.

For better Markdown compatibility, wrap bare URLs in angle brackets to prevent potential parsing issues.

🔎 Proposed fix
-  - API Docs: http://localhost:8765/docs
+  - API Docs: <http://localhost:8765/docs>

-- **Qdrant Dashboard**: http://localhost:6333/dashboard
+- **Qdrant Dashboard**: <http://localhost:6333/dashboard>

-- **Neo4j Browser**: http://localhost:7474
+- **Neo4j Browser**: <http://localhost:7474>

-- **OpenMemory UI**: http://localhost:3333
+- **OpenMemory UI**: <http://localhost:3333>
extras/openmemory-mcp/setup.sh (1)

86-100: Consider adding cleanup for temporary file.

The temporary file created with mktemp may not be cleaned up if the script is interrupted between creation and mv. Consider adding a trap to ensure cleanup.

🔎 Proposed fix
+# Cleanup trap for temporary files
+cleanup() {
+    rm -f "$temp_file" 2>/dev/null
+}
+trap cleanup EXIT
+
 # Update .env file safely using awk - replace existing line or append if missing
 temp_file=$(mktemp)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dc70ca2 and 9de6466.

📒 Files selected for processing (8)
  • config.yml
  • extras/openmemory-mcp/.env.template
  • extras/openmemory-mcp/.gitignore
  • extras/openmemory-mcp/README.md
  • extras/openmemory-mcp/docker-compose.yml
  • extras/openmemory-mcp/run.sh
  • extras/openmemory-mcp/setup.sh
  • services.py
💤 Files with no reviewable changes (1)
  • extras/openmemory-mcp/docker-compose.yml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Use Black formatter with 100-character line length for Python code
Use isort for Python import organization
ALL imports must be at the top of the file after the docstring - never import modules in the middle of functions or files
Group imports in Python files: standard library, third-party, then local imports
Use lazy imports sparingly and only when absolutely necessary for circular import issues in Python
Always raise errors in Python, never silently ignore - use explicit error handling with proper exceptions rather than silent failures
Avoid defensive hasattr() checks in Python - research and understand input/response or class structure instead

Files:

  • services.py
🧠 Learnings (3)
📚 Learning: 2025-12-20T01:11:51.495Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T01:11:51.495Z
Learning: Check if src/ is volume mounted in Docker - if not, run `docker compose build` so code changes are reflected (do not simply run `docker compose restart`)

Applied to files:

  • extras/openmemory-mcp/run.sh
📚 Learning: 2025-12-20T01:11:51.495Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T01:11:51.495Z
Learning: Use `docker compose build` without `--no-cache` by default for faster builds - only use `--no-cache` when explicitly needed for cached layer issues or troubleshooting

Applied to files:

  • extras/openmemory-mcp/run.sh
📚 Learning: 2025-12-08T23:52:34.959Z
Learnt from: AnkushMalaker
Repo: chronicler-ai/chronicle PR: 178
File: backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py:218-223
Timestamp: 2025-12-08T23:52:34.959Z
Learning: In Python code (chronicle project), prefer logging.exception() inside except blocks to automatically log the full stack trace. When re-raising exceptions, always chain with 'raise ... from e' to preserve the original context; use 'raise ... from None' only if you explicitly want to suppress the context. This improves debuggability across Python files.

Applied to files:

  • services.py
🪛 dotenv-linter (4.0.0)
extras/openmemory-mcp/.env.template

[warning] 9-9: [UnorderedKey] The OPENMEMORY_USER_ID key should go before the USER key

(UnorderedKey)

🪛 markdownlint-cli2 (0.18.1)
extras/openmemory-mcp/README.md

38-38: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


44-44: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


94-94: Bare URL used

(MD034, no-bare-urls)


96-96: Bare URL used

(MD034, no-bare-urls)


98-98: Bare URL used

(MD034, no-bare-urls)


102-102: Bare URL used

(MD034, no-bare-urls)

🔇 Additional comments (12)
extras/openmemory-mcp/.gitignore (1)

1-3: LGTM!

The .gitignore entries appropriately exclude the cloned fork directory and environment files containing secrets.

extras/openmemory-mcp/run.sh (1)

50-51: LGTM!

The --build flag addition aligns with the fork-based workflow, ensuring images are built from source. The informative message provides useful context. Based on learnings, using --build without --no-cache is the correct approach for faster builds.

extras/openmemory-mcp/.env.template (1)

7-20: LGTM!

The template is well-organized with clear section comments. The logical grouping of related variables (USER before OPENMEMORY_USER_ID) takes precedence over alphabetical ordering. Neo4j credentials in comments are appropriate for a local development template.

services.py (2)

36-41: LGTM!

The path update to extras/openmemory-mcp/mem0-fork/openmemory aligns with the fork-based workflow introduced by setup.sh. The expanded port list correctly reflects the new services (Neo4j browser 7474, Neo4j Bolt 7687, UI 3333).


48-54: The .env path handling for openmemory-mcp is correct. The official openmemory documentation confirms the .env file is located in the api/ subdirectory, which matches the code's check for (service_path / 'api' / '.env').exists() when service_name == 'openmemory-mcp'.

config.yml (2)

188-188: LGTM!

The provider switch from chronicle to openmemory_mcp aligns with the PR objectives.


196-198: LGTM!

The host.docker.internal URL is correct for Docker containers accessing host services, and the user_id: openmemory aligns with the defaults in .env.template.

extras/openmemory-mcp/README.md (2)

5-6: LGTM!

Clear note about using the fork, which sets expectations appropriately.


36-51: LGTM!

Good documentation structure with both unified service manager and manual options. The first-build timing note is helpful for user expectations.

extras/openmemory-mcp/setup.sh (3)

1-4: LGTM!

Good use of strict error handling with set -euo pipefail.


25-43: LGTM!

Good fork management with conditional cloning and optional update prompt. The error handling on clone failure is appropriate.


113-124: LGTM!

Clear post-setup instructions with all relevant endpoints and configuration guidance for both Docker and local development.

Comment on lines +192 to 194
prompt: Extract important information from this conversation and return a JSON
object with an array named "facts". Include personal preferences, plans, names,
dates, locations, numbers, and key details hehehe. Keep items concise and useful.
Contributor


⚠️ Potential issue | 🟡 Minor

Remove debug text from extraction prompt.

The prompt contains "hehehe" which appears to be leftover debug/test text and should be removed before merging.

🔎 Proposed fix
     prompt: Extract important information from this conversation and return a JSON
       object with an array named "facts". Include personal preferences, plans, names,
-      dates, locations, numbers, and key details hehehe. Keep items concise and useful.
+      dates, locations, numbers, and key details. Keep items concise and useful.
🤖 Prompt for AI Agents
In config.yml around lines 192 to 194, the prompt string includes leftover debug
text "hehehe"; remove that token so the prompt reads cleanly (e.g., "Extract
important information from this conversation and return a JSON object with an
array named \"facts\". Include personal preferences, plans, names, dates,
locations, numbers, and key details. Keep items concise and useful."). Update
the file by deleting the "hehehe" text and fixing any surrounding
whitespace/punctuation so the sentence remains grammatically correct, then run a
quick lint/check to ensure YAML validity.

- Introduced a new `docker-compose.yml` file to manage services for OpenMemory MCP, including Qdrant and Neo4j.
- Enhanced `.env.template` with additional configuration options for Neo4j and Qdrant.
- Updated `setup.sh` to reflect the use of pre-built Docker images and simplified the setup process.
- Improved user instructions for starting services and accessing various components.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
extras/openmemory-mcp/.env.template (1)

19-23: Critical: Same hardcoded password issue applies here.

This template contains the same hardcoded Neo4j password flagged in docker-compose.yml. The comment "Default credentials" suggests these are meant to be changed, but:

  • There's no validation requiring users to change it
  • The setup scripts don't enforce or prompt for a secure password
  • Users may overlook this and deploy with the default

Ensure your setup script (setup.sh) validates that users have changed this password or generates a secure random password automatically. See the recommendations in the docker-compose.yml review.

🧹 Nitpick comments (3)
extras/openmemory-mcp/docker-compose.yml (1)

15-15: Pin Docker image versions instead of using :latest tags.

All services use :latest tags, which reduces reproducibility and can introduce unexpected breaking changes or security vulnerabilities when images are pulled at different times.

🔎 Recommended approach

Pin specific image versions or use commit SHAs for the fork images:

-    image: ghcr.io/ushadow-io/u-mem0-api:latest
+    image: ghcr.io/ushadow-io/u-mem0-api:v1.0.0  # or specific commit SHA

-    image: ghcr.io/ushadow-io/u-mem0-ui:latest
+    image: ghcr.io/ushadow-io/u-mem0-ui:v1.0.0

-    image: neo4j:latest
+    image: neo4j:5.15.0  # or another stable version

Also applies to: 44-44, 58-58

extras/openmemory-mcp/.env.template (2)

8-9: Consider consolidating redundant user identifier variables.

Both USER and OPENMEMORY_USER_ID are set to the same value. If they serve the same purpose, consider using a single variable to reduce configuration complexity and potential inconsistencies.

If they are genuinely used by different components, add a comment explaining the distinction:

-# Optional: User identifier (defaults to 'openmemory')
+# Optional: User identifier for MCP server (defaults to 'openmemory')
 USER=openmemory
+# User identifier for OpenMemory frontend
 OPENMEMORY_USER_ID=openmemory

1-26: Static analysis: Optional key ordering suggestions.

The dotenv-linter tool suggests reordering keys alphabetically (e.g., OPENMEMORY_USER_ID before USER, NEO4J_PASSWORD before NEO4J_URL). While this is a stylistic preference that can improve consistency, it's not critical for functionality.

You can safely ignore these warnings if you prefer logical grouping over alphabetical ordering. However, if you want to satisfy the linter:

🔎 Alphabetically ordered version
-USER=openmemory
 OPENMEMORY_USER_ID=openmemory
+USER=openmemory

-NEO4J_URL=bolt://neo4j:7687
-NEO4J_USERNAME=neo4j
 NEO4J_PASSWORD=taketheredpillNe0
+NEO4J_URL=bolt://neo4j:7687
+NEO4J_USERNAME=neo4j
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9de6466 and 658615f.

📒 Files selected for processing (4)
  • extras/openmemory-mcp/.env.template
  • extras/openmemory-mcp/docker-compose.yml
  • extras/openmemory-mcp/setup.sh
  • services.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • extras/openmemory-mcp/setup.sh
  • services.py
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
extras/openmemory-mcp/.env.template

[warning] 9-9: [UnorderedKey] The OPENMEMORY_USER_ID key should go before the USER key

(UnorderedKey)


[warning] 23-23: [UnorderedKey] The NEO4J_PASSWORD key should go before the NEO4J_URL key

(UnorderedKey)

🔇 Additional comments (1)
extras/openmemory-mcp/docker-compose.yml (1)

36-36: Verify healthcheck tool availability in container images.

The neo4j:latest image includes wget in its default configuration, so the healthcheck on line 81 is valid. However, the ghcr.io/ushadow-io/u-mem0-api:latest image cannot be independently verified (not publicly discoverable), though as a FastAPI Python application it necessarily includes python3 for execution. If the u-mem0-api image differs significantly from expected Python base images, consider alternative healthchecks using curl or direct endpoint testing.

Comment on lines +23 to +25
- NEO4J_URL=${NEO4J_URL:-bolt://neo4j:7687}
- NEO4J_USERNAME=${NEO4J_USERNAME:-neo4j}
- NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
Contributor


⚠️ Potential issue | 🔴 Critical

Critical: Hardcoded Neo4j password is a major security vulnerability.

The password taketheredpillNe0 is hardcoded in both the docker-compose file and .env.template, and Neo4j is exposed on ports 7474 and 7687. This creates a significant security risk since:

  • The password is publicly visible in the repository
  • Neo4j is accessible from the host network
  • An attacker could easily discover and exploit this credential
🔎 Recommended fixes

Option 1 (Recommended): Remove defaults entirely and require users to set secure passwords:

-      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
+      - NEO4J_PASSWORD=${NEO4J_PASSWORD:?NEO4J_PASSWORD must be set in .env}

-      - NEO4J_AUTH=neo4j/taketheredpillNe0
+      - NEO4J_AUTH=${NEO4J_USERNAME:-neo4j}/${NEO4J_PASSWORD:?NEO4J_PASSWORD required}

Option 2: Generate a random password during setup:

Add to your setup script:

# Generate secure random password if not set
if [ -z "$NEO4J_PASSWORD" ]; then
  NEO4J_PASSWORD=$(openssl rand -base64 32)
  echo "NEO4J_PASSWORD=$NEO4J_PASSWORD" >> .env
fi

Option 3: Use Docker secrets (for production):

secrets:
  neo4j_password:
    file: ./secrets/neo4j_password.txt
environment:
  - NEO4J_AUTH=neo4j/run/secrets/neo4j_password

Also update the README to emphasize that users must change the default password before deployment.

Also applies to: 65-65

🤖 Prompt for AI Agents
In extras/openmemory-mcp/docker-compose.yml (around lines 23-25 and line 65)
there is a hardcoded Neo4j password; remove that default value so the compose
file does not provide a fallback password, require NEO4J_PASSWORD to be supplied
from the environment (and update .env.template to leave it empty), and update
README to instruct users to set a strong password before deployment; for
production recommend using Docker secrets or an init script that generates and
stores a random password if none is provided, and avoid binding Neo4j to the
host network/ports by default unless explicitly configured for local dev.

Comment on lines +69 to +73
- NEO4J_apoc_export_file_enabled=true
- NEO4J_apoc_import_file_enabled=true
- NEO4J_apoc_import_file_use__neo4j__config=true
- NEO4J_PLUGINS=["apoc", "graph-data-science"]
- NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
Contributor


⚠️ Potential issue | 🟠 Major

Security risk: Unrestricted procedures and file operations enabled.

Lines 69-73 enable APOC file import/export and mark all APOC/GDS procedures as unrestricted. This configuration:

  • Allows procedures to read/write files on the host filesystem through volume mounts
  • Bypasses Neo4j's security sandboxing
  • Could be exploited if an attacker gains access to execute Cypher queries

Consider restricting procedures to only those explicitly needed:

-      - NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
+      - NEO4J_dbms_security_procedures_unrestricted=apoc.load.*,apoc.export.*

If full unrestricted access is genuinely required, document why in a comment and add authentication/network isolation safeguards.

🤖 Prompt for AI Agents
In extras/openmemory-mcp/docker-compose.yml around lines 69-73, the compose file
currently enables APOC file import/export and marks all apoc/gds procedures as
unrestricted which exposes host file access and bypasses Neo4j sandboxing;
change these env vars to only enable the specific APOC/GDS procedures you need
(remove the broad NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.* and
instead list exact procedure names), disable file import/export unless required
(remove or set
NEO4J_apoc_import_file_enabled/NEO4J_apoc_export_file_enabled=false) or
constrain import paths to allowed directories, and if you truly need
unrestricted access add an inline comment documenting why plus enforce stronger
authentication and network isolation (e.g., limit container network access and
require strong credentials).

@AnkushMalaker AnkushMalaker changed the title [DO NOT MERGE] Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… Dec 23, 2025